[pull] master from cloud-bulldozer:master #8

Open
wants to merge 256 commits into master from cloud-bulldozer:master
Conversation


@pull pull bot commented Mar 22, 2021

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

@pull pull bot added the ⤵️ pull label Mar 23, 2021
HughNhan and others added 29 commits March 23, 2021 18:07
Model-S supports "node_range: [n, m]" and "density_range: [x, y]"
to specify enumeration along both dimensions.
…rk context.

Fix the last commit, which introduced node_range and density_range but inadvertently
still used min_node and max_node for condition checks, i.e. "when: max_node is defined"
…a correlation.

While at it, beef up a few places with "when: xxx is defined" guards for robustness.
Valid step_size values are addN or log2, where N can be any decimal number.
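As a rough illustration of the options above, a scale-mode Benchmark CR might look like the following; the skeleton matches the operator's usual CR shape, but the values here are illustrative only:

```
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf-scale
  namespace: benchmark-operator
spec:
  workload:
    name: uperf
    args:
      pin: false               # scale mode
      node_range: [1, 4]       # enumerate worker-node counts 1..4
      density_range: [1, 8]    # enumerate pod pairs per node 1..8
      step_size: log2          # or addN, where N is any decimal number
      colocate: false
```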
Enter the main loop (due to start=True), then acquire pod_idx and node_idx.
They miss the restart state and run prematurely.
This fits with the idea that we are viewing worker nodes as just a pool
of resources rather than as individual hostnames. It also gives users the
flexibility to isolate tests to a single hardware model of workers (since
each model can be labelled with its model name).

There was also a bug in the previous code where only the first node in the list
was actually excluded, via `workload_args.excluded_node[0]`.

Signed-off-by: Sai Sindhur Malleni <[email protected]>
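A minimal sketch of what fixing the exclusion bug looks like: consume the whole excluded_node list instead of indexing element 0. The surrounding node-affinity structure is an assumption for illustration.

```
# Sketch only: exclude every node in the list, not just excluded_node[0].
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: NotIn
          values: {{ workload_args.excluded_node | list }}   # whole list, not [0]
```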
  Increase redis-server to 2 CPUs.
  Add sleeps to workload/uperf clients
Point operator resource spec to compatible image.
Signed-off-by: Sai Sindhur Malleni <[email protected]>
***Important***
 pin=true/false
   "pin=true" runs Pin mode
   "pin=false" runs Scale mode
 In Pin mode, "pair=n" is now obsolete. Use density_range[] instead.

Scale mode: the default values are
   node_range=[1,1], density_range=[1,1], step_size=add1 and colocate=false
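Side by side, a minimal sketch of the two modes as workload args (field names come from the message above; values are illustrative):

```
workload:
  name: uperf
  args:
    # Pin mode: fixed placement; "pair" is obsolete, use density_range
    pin: true
    density_range: [2, 2]

    # Scale mode (pin: false) falls back to these defaults:
    # node_range: [1, 1]
    # density_range: [1, 1]
    # step_size: add1
    # colocate: false
```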
Integrated "serviceip" into scale framework. Was not working until this commit.
Cosmetic cleanup i.e debug TASK tags, and doublle "when" guards.
  a. Keep 'pair' for pin mode
  b. Remove custom image URLs
2. Rebase to upstream
vishnuchalla and others added 30 commits July 12, 2022 18:23
* initial draft of standard uperf with new features

* bug fix for client creation in nodeport service

* bug fix for client creation conflict due to same name in metadata

* initial draft for uperf-scale mode

* removed uperf to avoid confusion

* added index for metallb service labels

* fix for client port mismatch with server and services

* fixed es index issues

* fixed services issue

* bug fix for clusterip service ports mismatch

* reverting quay repo org to cloudbulldozer

* fix for service ip use cases except for nodeport

* fixed ports issues for nodeport service use case

* added test for uperf_scale

* restoring image repo to original

* removing bash eternal history

* removed test configuration

* fix for client start issue caused by redis variables for vm use case

* renamed client_vm to client_vms for vm kind

* fixing redis variables for pod use cases as well

* fixed typo

* resolving PR comments

* removed an old comment

Co-authored-by: root <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Murali Krishnasamy <[email protected]>
With FIO we need to wait for the server pods to be annotated with IP
addresses. This fix adds that wait.

closes #789

Signed-off-by: Joe Talerico <[email protected]>
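A plausible shape for that wait, sketched as an Ansible task; the label selector and pod_count variable are assumptions, and kubernetes.core.k8s_info is used here as the typical module for this kind of poll.

```
# Sketch: poll until every fio server pod reports a pod IP.
- name: Wait for fio server pods to receive IP addresses
  kubernetes.core.k8s_info:
    kind: Pod
    namespace: "{{ operator_namespace }}"
    label_selectors:
      - app = fio-benchmark          # hypothetical label
  register: server_pods
  until: server_pods.resources | selectattr('status.podIP', 'defined') | list | length == (pod_count | int)
  retries: 30
  delay: 10
```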
* Affinity rules

Use podAntiAffinity rules to place uperf client pods on different worker
nodes from the ones where the servers are running (a sketch of this rule
follows after this message).

Use podAffinity rules to co-locate the client pods and the server pods, which
is useful when running more than one pair: all client pods will be scheduled
on the same node and all servers on another.

Signed-off-by: Raul Sevilla <[email protected]>

* Update docs

Signed-off-by: Raul Sevilla <[email protected]>
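For reference, the anti-affinity half of the rules described above could look roughly like this on the client pod template; the label and topology key are illustrative.

```
# Sketch: keep uperf clients off the nodes where servers run.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: uperf-bench-server    # hypothetical server label
      topologyKey: kubernetes.io/hostname
```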
* updated README.md

Spelling typos!

* Update README.md

Another typo

* Update README.md
* SCTP support for services

* Added a test now that e2e cluster supports SCTP

* Added a pod2pod test
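An SCTP Service is plain Kubernetes; a minimal sketch (names and port are illustrative, and the cluster must support SCTP):

```
apiVersion: v1
kind: Service
metadata:
  name: uperf-sctp               # hypothetical name
spec:
  selector:
    app: uperf-bench-server      # hypothetical label
  ports:
  - protocol: SCTP
    port: 30000
    targetPort: 30000
```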
Set benchmark-operator's affinity so it is preferably scheduled on workload
nodes

Signed-off-by: Raul Sevilla <[email protected]>
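"Preferably scheduled" maps to a soft node-affinity rule; a sketch, assuming workload nodes carry a node-role label (the exact key is an assumption):

```
# Sketch: prefer, but do not require, workload nodes.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: node-role.kubernetes.io/workload   # assumed label key
          operator: Exists
```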
* Adding option to run tests with pvcvolumemode: Block

* Running FIO tests with pvcvolumemode set as Block

* Update kustomization.yaml
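On the generated PVC, pvcvolumemode: Block presumably translates to volumeMode: Block; a minimal sketch:

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fio-block-pvc            # hypothetical name
spec:
  accessModes: [ReadWriteOnce]
  volumeMode: Block              # raw block device instead of a filesystem
  resources:
    requests:
      storage: 10Gi
```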
Signed-off-by: Huamin Chen <[email protected]>
Have not tested using `kind: vm`
…d, previous behavior is default (#799)

Added support for prepare, run, and cleanup phase options in sysbench fileio
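A hedged guess at how the phase options might surface in the CR; the field names below are hypothetical and only sketch the idea of toggling each phase.

```
workload:
  name: sysbench
  args:
    fileio:
      prepare: true      # hypothetical flags for the three phases
      run: true
      cleanup: false     # e.g. keep the test files around for inspection
```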
* code snippets for creating nginx server side resources

* corrected typo in the workload name

* initial draft for nighthawk workload

* updated review comments (#1)

* added default values for all workload_args

* restored files to avoid resolve conflicts

* adding readme and keeping default service as ClusterIP

* made changes to keep clusterIP default

---------

Co-authored-by: root <[email protected]>
Co-authored-by: Murali Krishnasamy <[email protected]>
Co-authored-by: Vishnu Challa <[email protected]>
Add the es_index parameter to the log-generator workload template, as it can be set by e2e-benchmarking as ES_BACKEND_INDEX
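In the CR this is a single pass-through field; a sketch with an illustrative value:

```
workload:
  name: log-generator
  args:
    es_index: log-generator-results   # supplied by e2e-benchmarking as ES_BACKEND_INDEX
```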
1. Add storageClassName to the "spec" section of the PVC.
2. Remove volume.beta.kubernetes.io/storage-class from the annotations section.
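Before and after, sketched on a PVC (the storage class name is illustrative):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: fast   # removed: deprecated form
spec:
  storageClassName: fast         # set in spec instead
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
```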
* zone affinity for clients

* updated affinity

* corrected the mandatory fields
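Zone affinity swaps the hostname topology key for the zone one; a sketch with an illustrative client label:

```
# Sketch: co-locate clients within a single zone.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: uperf-bench-client   # hypothetical label
      topologyKey: topology.kubernetes.io/zone
```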
CNV w/ OpenShift 4.15 was broken. This should fix it.

Signed-off-by: Joe Talerico <[email protected]>
using the "--no-cache-dir" flag in pip install, make sure downloaded packages by pip don't cache on the system. This is a best practice that makes sure to fetch from a repo instead of using a local cached one. Further, in the case of Docker Containers, by restricting caching, we can reduce image size. In terms of stats, it depends upon the number of python packages multiplied by their respective size. e.g for heavy packages with a lot of dependencies it reduces a lot by don't cache pip packages.

Further, more detailed information can be found at

https://medium.com/sciforce/strategies-of-docker-images-optimization-2ca9cc5719b6

Signed-off-by: Pratik Raj <[email protected]>
Signed-off-by: Vishnu Challa <[email protected]>
Co-authored-by: Vishnu Challa <[email protected]>
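In a Dockerfile this is a one-flag change; a minimal sketch (base image and requirements file are illustrative):

```
FROM python:3.9-slim
COPY requirements.txt .
# --no-cache-dir keeps pip's download/wheel cache out of the image layer
RUN pip install --no-cache-dir -r requirements.txt
```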
* Update kubevirt API version

Signed-off-by: Raul Sevilla <[email protected]>

* Wait for servers to be running before registering interfaces

Signed-off-by: Raul Sevilla <[email protected]>

* Centos 8 appstream is EOL: Use vault repository

Signed-off-by: Raul Sevilla <[email protected]>

---------

Signed-off-by: Raul Sevilla <[email protected]>
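The usual shape of the CentOS 8 vault workaround is a repo-file rewrite; a sketch (exact repo file names vary by image):

```
# Sketch: point dnf at vault.centos.org now that the EOL mirrors are gone.
RUN sed -i -e 's|mirrorlist=|#mirrorlist=|g' \
           -e 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' \
           /etc/yum.repos.d/CentOS-Linux-*.repo
```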
```
looking for "jobfile.j2" at "/opt/ansible/roles/stressng/templates/jobfile.j2"
File lookup using /opt/ansible/roles/stressng/templates/jobfile.j2 as file
fatal: [localhost]: FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'. 'dict object' has no attribute 'cpu_method'\n\nThe error appears to be in '/opt/ansible/roles/stressng/tasks/main.yaml': line 4, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - name: template stressng config file\n    ^ here\n"
}
```
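The failure is an undefined workload_args.cpu_method reaching the template; one hedged fix is a Jinja default in jobfile.j2 (the chosen default value is an assumption):

```
# jobfile.j2 sketch: tolerate a missing cpu_method in workload_args
cpu-method {{ workload_args.cpu_method | default('all') }}
```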
* uperf: wait for vms update

* Update base image

* Update pod starting clients check